Downsampling and feature extraction are essential procedures for 3D point cloud understanding. Existing methods are limited by the inconsistent point densities of different parts of the point cloud. In this work, we analyze the limitation of the downsampling stage and propose the pre-abstraction group-wise window-normalization module. In particular, the window-normalization method is leveraged to unify the point densities in different parts. Furthermore, the group-wise strategy is proposed to obtain multi-type features, including texture and spatial information. We also propose the pre-abstraction module to balance local and global features. Extensive experiments show that our module performs better across several tasks. On the S3DIS (Area 5) segmentation task, the proposed module performs better on small-object recognition, and its results have more precise boundaries than those of other methods. The recognition of the sofa and the column is improved from 69.2% to 84.4% and from 42.7% to 48.7%, respectively. The benchmarks are improved from 71.7%/77.6%/91.9% (mIoU/mAcc/OA) to 72.2%/78.2%/91.4%. The accuracies of 6-fold cross-validation on S3DIS are 77.6%/85.8%/91.7%. It outperforms the best model PointNeXt-XL (74.9%/83.0%/90.3%) by 2.7% on mIoU and achieves state-of-the-art performance. The code and models are available at https://github.com/DBDXSS/Window-Normalization.git.
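As a rough illustration of the window-normalization idea, the sketch below rescales the relative coordinates inside each local neighborhood by that window's extent, so sparse and dense regions produce features on a comparable scale. Function and variable names are our own assumptions, not the authors' implementation.

```python
# A minimal sketch of window normalization for point clouds (assumed formulation).
import torch

def window_normalize(points: torch.Tensor, knn_idx: torch.Tensor) -> torch.Tensor:
    """
    points:  (N, 3) point coordinates.
    knn_idx: (N, k) indices of each point's k nearest neighbors.
    returns: (N, k, 3) density-normalized relative coordinates.
    """
    neighbors = points[knn_idx]                       # (N, k, 3) neighbor coordinates
    rel = neighbors - points.unsqueeze(1)             # (N, k, 3) offsets to the center point
    # The maximum offset norm per window acts as the normalization factor, so every
    # local window is rescaled to a unit-sized region regardless of its point density.
    extent = rel.norm(dim=-1, keepdim=True).amax(dim=1, keepdim=True).clamp(min=1e-6)
    return rel / extent
```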
Background and Objective: Existing deep learning platforms for medical image segmentation mainly focus on fully supervised segmentation, which assumes that sufficient and accurate pixel-level annotations are available. We aim to develop a new deep learning toolkit to support annotation-efficient learning for medical image segmentation, which can accelerate and simplify the development of deep learning models under a limited annotation budget, e.g., learning from partial, sparse, or noisy annotations. Methods: Our proposed toolkit, named PyMIC, is a modular deep learning platform for medical image segmentation tasks. In addition to basic components that support developing high-performance models for fully supervised segmentation, it contains several advanced components tailored to learning from imperfect annotations, such as loading annotated and unannotated images, loss functions for unannotated, partially annotated, or inaccurately annotated images, and training procedures for co-learning between multiple networks. Built on the PyTorch framework, PyMIC supports semi-supervised, weakly supervised, and noise-robust learning methods for medical image segmentation. Results: We present four illustrative medical image segmentation tasks based on PyMIC: (1) achieving competitive performance in fully supervised learning; (2) semi-supervised cardiac structure segmentation with only 10% of the training images annotated; (3) weakly supervised segmentation using scribble annotations; and (4) learning from noisy labels for chest radiograph segmentation. Conclusions: The PyMIC toolkit is easy to use and facilitates the efficient development of medical image segmentation models with imperfect annotations. It is modular and flexible, enabling researchers to develop high-performance models at low annotation cost. The source code is available at: https://github.com/hilab-git/pymic.
The success of convolutional neural networks (CNNs) in 3D medical image segmentation relies on large amounts of fully annotated 3D volumes for training, which are time-consuming and labor-intensive to obtain. In this paper, we propose to annotate a segmentation target with only seven points in 3D medical images and design a two-stage weakly supervised learning framework, PA-Seg. In the first stage, we employ the geodesic distance transform to expand the seed points and thereby provide more supervision signals. To further deal with unannotated image regions during training, we propose two contextual regularization strategies, namely a multi-view Conditional Random Field (mCRF) loss and a Variance Minimization (VM) loss, where the former encourages pixels with similar features to have consistent labels and the latter minimizes the intensity variance of the segmented foreground and background, respectively. In the second stage, we use the predictions obtained by the model pre-trained in the first stage as pseudo labels. To overcome the noise in the pseudo labels, we introduce a Self and Cross Monitoring (SCM) strategy, which combines self-training with Cross Knowledge Distillation (CKD) between a primary model and an auxiliary model that learn from the soft labels generated by each other. Experiments on public datasets for Vestibular Schwannoma (VS) segmentation and Brain Tumor Segmentation (BraTS) show that our model trained in the first stage outperforms existing state-of-the-art weakly supervised methods, and after additional training with SCM, it achieves performance competitive with its fully supervised counterpart on the BraTS dataset.
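The Variance Minimization regularizer lends itself to a compact sketch. The snippet below is an assumed formulation, not the authors' code: the predicted soft segmentation weights a per-region intensity mean and variance, and the foreground and background variances are both minimized.

```python
# A hedged sketch of a variance-minimization regularizer for weakly supervised segmentation.
import torch

def variance_minimization_loss(image: torch.Tensor, prob_fg: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """
    image:   (B, 1, D, H, W) intensities.
    prob_fg: (B, 1, D, H, W) predicted foreground probabilities.
    """
    loss = 0.0
    for p in (prob_fg, 1.0 - prob_fg):                              # foreground, then background
        w_sum = p.sum(dim=(2, 3, 4), keepdim=True) + eps            # total soft mass of the region
        mean = (p * image).sum(dim=(2, 3, 4), keepdim=True) / w_sum # probability-weighted mean intensity
        var = (p * (image - mean) ** 2).sum(dim=(2, 3, 4), keepdim=True) / w_sum
        loss = loss + var.mean()
    return loss
```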
Skull stripping is a crucial prerequisite step in the analysis of brain magnetic resonance images (MRI). Although many excellent works or tools have been proposed, they suffer from low generalization capability. For instance, a model trained on a dataset with specific imaging parameters cannot be well applied to other datasets with different imaging parameters. In particular, for lifespan datasets, a model trained on an adult dataset is not applicable to an infant dataset due to the large domain gap. To address this issue, numerous methods have been proposed, among which domain adaptation based on feature alignment is the most common. Unfortunately, this approach has some inherent shortcomings: it needs to be retrained for each new domain and requires concurrent access to the input images of both domains. In this paper, we design a plug-and-play shape refinement (PSR) framework for multi-site and lifespan skull stripping. To deal with the domain shift between multi-site lifespan datasets, we take advantage of the brain shape prior, which is invariant to imaging parameters and ages. Experiments demonstrate that our framework outperforms the state-of-the-art methods on multi-site lifespan datasets.
Recently, deep learning with Convolutional Neural Networks (CNNs) and Transformers has shown encouraging results in medical image segmentation. However, it remains challenging for these models to achieve good performance with limited annotations for training. In this work, we present a very simple yet efficient framework for semi-supervised medical image segmentation by introducing cross teaching between a CNN and a Transformer. Specifically, we simplify classical deep co-training from consistency regularization to cross teaching, where the prediction of one network is used as a pseudo label to supervise the other network directly and end-to-end. Considering the difference in learning paradigms between CNNs and Transformers, we introduce cross teaching between a CNN and a Transformer rather than using CNNs alone. Experiments on a public benchmark show that our method outperforms eight existing semi-supervised learning methods with a simpler framework. Notably, this work may be the first attempt to combine a CNN and a Transformer for semi-supervised medical image segmentation and achieves promising results on a public benchmark. The code is released at: https://github.com/HiLab-git/SSL4MIS.
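A minimal sketch of the cross-teaching objective, assuming hard pseudo labels and equal weighting of the two directions (the names, the 0.5 weight, and the use of argmax labels are illustrative assumptions, not taken from the released code):

```python
# A hedged sketch of cross teaching between two networks on unlabeled images.
import torch
import torch.nn.functional as F

def cross_teaching_loss(cnn_logits: torch.Tensor, trans_logits: torch.Tensor) -> torch.Tensor:
    # Pseudo labels are the argmax predictions, detached so no gradient flows
    # back into the network that produced them.
    pseudo_from_cnn = cnn_logits.argmax(dim=1).detach()
    pseudo_from_trans = trans_logits.argmax(dim=1).detach()
    loss_cnn = F.cross_entropy(cnn_logits, pseudo_from_trans)     # Transformer teaches the CNN
    loss_trans = F.cross_entropy(trans_logits, pseudo_from_cnn)   # CNN teaches the Transformer
    return 0.5 * (loss_cnn + loss_trans)
```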
Whole abdominal organ segmentation plays an important role in diagnosing abdominal lesions, radiotherapy planning, and follow-up. However, delineating all abdominal organs by hand is time-consuming and very expensive for oncologists. Recently, deep learning-based medical image segmentation has shown the potential to reduce the manual delineation effort, but it still requires a large-scale, finely annotated dataset for training. Although many efforts have been devoted to this task, there are still few large image datasets covering the whole abdominal region with accurate and detailed annotations for whole abdominal organ segmentation. In this work, we establish a large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. This dataset contains 150 abdominal CT volumes (30,495 slices), and each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotations, which may be the largest dataset with whole abdominal organ annotations. Several state-of-the-art segmentation methods are evaluated on this dataset. Moreover, we invite clinical oncologists to revise the model predictions to measure the gap between deep learning methods and real oncologists. We further introduce and evaluate a new scribble-based weakly supervised segmentation method on this dataset. This work provides a new benchmark for the whole abdominal organ segmentation task, and these experiments can serve as baselines for future research and clinical application development. The codebase and dataset will be released at: https://github.com/HiLab-git/WORD.
Automatic and accurate lung nodule detection from 3D computed tomography (CT) scans plays a vital role in efficient lung cancer screening. Despite the state-of-the-art performance obtained by anchor-based detectors using CNNs, they require predetermined anchor parameters, such as the size, number, and aspect ratio of anchors, and have limited robustness when dealing with lung nodules with a massive variety of sizes. To overcome these problems, we propose a 3D sphere-representation-based center-points matching detection network (SCPM-Net), which is anchor-free and automatically predicts the position, radius, and offset of nodules without manual design of nodule/anchor parameters. SCPM-Net consists of two novel components: sphere representation and center-points matching. First, to match the nodule annotation in clinical practice, we replace the commonly used bounding box with the proposed bounding sphere to represent nodules with a centroid, a radius, and a local offset in 3D space. A compatible sphere-based intersection-over-union loss function is introduced to train the lung nodule detection network stably and efficiently. Second, we empower the network to be anchor-free by designing a positive center-points selection and matching process, naturally discarding predetermined anchor boxes. Online hard example mining and a re-focal loss subsequently make the center-points matching process more robust, leading to more accurate point assignment and mitigation of class imbalance. In addition, to better capture spatial information and 3D context for detection, we propose to fuse multi-level spatial coordinate maps with the feature extractor and combine them with 3D squeeze-and-excitation attention modules. Experimental results on the LUNA16 dataset demonstrate that our proposed framework achieves superior performance compared with existing anchor-based and anchor-free methods for lung nodule detection.
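The bounding-sphere representation admits a closed-form overlap between two spheres; the sketch below shows one hedged way such a sphere IoU could be computed. The paper introduces its own sphere-based intersection-over-union loss, which may differ from this exact two-sphere lens formula.

```python
# A hedged sketch of an IoU between bounding spheres (cx, cy, cz, r).
import math
import torch

def sphere_iou(s1: torch.Tensor, s2: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """s1, s2: (..., 4) tensors holding (cx, cy, cz, r)."""
    d = (s1[..., :3] - s2[..., :3]).norm(dim=-1).clamp(min=eps)   # center distance
    r1, r2 = s1[..., 3], s2[..., 3]
    v1 = 4.0 / 3.0 * math.pi * r1 ** 3
    v2 = 4.0 / 3.0 * math.pi * r2 ** 3
    # Lens volume of two intersecting spheres; 0 if disjoint, smaller sphere if nested.
    lens = math.pi * (r1 + r2 - d) ** 2 * (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2) / (12 * d)
    inter = torch.where(d >= r1 + r2, torch.zeros_like(d),
                        torch.where(d <= (r1 - r2).abs(), torch.minimum(v1, v2), lens))
    return inter / (v1 + v2 - inter + eps)
```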
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process and misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against state-of-the-art single image super-resolution approaches.
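One way to picture the layer-attention idea is to flatten the N hierarchical feature maps, build their pairwise correlation matrix, and use it to re-weight and residually combine the layers. The sketch below is an illustration under these assumptions, not the authors' exact LAM implementation.

```python
# A hedged sketch of a layer attention module over hierarchical feature maps.
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))    # learnable residual scale

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C, H, W) features collected from N intermediate layers.
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                                 # (B, N, C*H*W)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # (B, N, N) layer correlations
        out = (attn @ flat).view(b, n, c, h, w)                     # re-weighted hierarchical features
        return self.scale * out + feats                             # residual connection
```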
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; and so on. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
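To make token-relation distillation concrete, the sketch below shows one plausible formulation: the student matches the teacher's softmax-normalized token-to-token similarity map from a chosen intermediate teacher layer via a KL divergence. The normalization, temperature, and names are assumptions, not TinyMIM's exact recipe.

```python
# A hedged sketch of distilling token relations instead of CLS tokens or raw features.
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_tokens: torch.Tensor,
                                teacher_tokens: torch.Tensor,
                                tau: float = 1.0) -> torch.Tensor:
    """student_tokens, teacher_tokens: (B, L, C) patch-token sequences."""
    def relation(tokens: torch.Tensor) -> torch.Tensor:
        t = F.normalize(tokens, dim=-1)              # cosine-style token similarities
        return (t @ t.transpose(1, 2)) / tau         # (B, L, L) relation logits
    s_rel = F.log_softmax(relation(student_tokens), dim=-1)
    t_rel = F.softmax(relation(teacher_tokens), dim=-1).detach()   # teacher map is a fixed target
    return F.kl_div(s_rel, t_rel, reduction="batchmean")
```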
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
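One reading of the implicit alignment is that 3D reference coordinates are mapped to a positional term shared by both modalities, so no explicit view transformation is needed. The sketch below is an assumption-laden illustration of that idea, not the CMT implementation; the MLP shape and names are hypothetical.

```python
# A hedged sketch of a 3D-coordinate positional encoding shared across modalities.
import torch
import torch.nn as nn

class Coord3DPositionEncoding(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))

    def forward(self, tokens: torch.Tensor, points_3d: torch.Tensor) -> torch.Tensor:
        # tokens:    (B, L, C) tokens from either the image or the LiDAR branch.
        # points_3d: (B, L, 3) 3D coordinates associated with each token.
        return tokens + self.mlp(points_3d)          # shared 3D-aware positional term
```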